Attractor Networks
Abstract
Artificial neural networks (ANNs), sometimes referred to as connectionist networks, are computational models based loosely on the neural architecture of the brain. Over the past twenty years, ANNs have proven to be a fruitful framework for modeling many aspects of cognition, including perception, attention, learning and memory, language, and executive control. A particular type of ANN, called an attractor network, is central to computational theories of consciousness, because attractor networks can be analyzed in terms of properties, such as temporal stability and the strength, quality, and discreteness of representation, that have been ascribed to conscious states. Some theories have gone so far as to posit that attractor networks are the computational substrate from which conscious states arise.
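To make the idea concrete, the sketch below implements a Hopfield-style attractor network in NumPy. The article itself gives no code, so the network size, stored patterns, and update schedule here are illustrative assumptions. Binary patterns are stored with a Hebbian outer-product rule, and a corrupted cue is updated asynchronously until the state stops changing, that is, until it settles into a stable attractor; the discrete, temporally stable fixed points are the kind of properties the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hopfield(patterns):
    """Store +/-1 patterns with the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)                      # no self-connections
    return W / patterns.shape[0]

def settle(W, state, max_sweeps=100):
    """Update units asynchronously until the state stops changing,
    i.e. until the network has reached a stable attractor."""
    state = state.copy()
    for _ in range(max_sweeps):
        prev = state.copy()
        for i in rng.permutation(len(state)):   # random update order
            state[i] = 1 if W[i] @ state >= 0 else -1
        if np.array_equal(state, prev):         # fixed point reached
            break
    return state

# Two arbitrary 32-unit binary patterns to serve as attractors.
patterns = rng.choice([-1, 1], size=(2, 32))
W = train_hopfield(patterns)

# Corrupt the first pattern (flip a few units) and let the network clean it up.
cue = patterns[0].copy()
cue[:5] *= -1
recalled = settle(W, cue)
print("recovered stored pattern:", np.array_equal(recalled, patterns[0]))
```

With only two stored patterns in 32 units, the corrupted cue typically falls back into the stored pattern's basin of attraction within a couple of update sweeps; this clean-up of a noisy cue is the pattern completion described in the abstracts listed under Similar sources below.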
Similar sources
Localist Attractor Networks (submitted to Neural Computation)
Attractor networks, which map an input space to a discrete output space, are useful for pattern completion—cleaning up noisy or missing input features. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection ...
Learning in sparse attractor networks with inhibition
Attractor networks are important models for brain functions on a behavioral and physiological level, but learning on sparse patterns has not been fully explained. Here we show that the inclusion of the activity-dependent effect of an inhibitory pool in Hebbian learning can accomplish learning of stable sparse attractors in both continuous attractor and point attractor neural networks. (A toy sketch of this inhibition idea appears after this list.)
Attractor Equivalence: An Observational Semantics for Reaction Networks
We study observational semantics for networks of chemical reactions as used in systems biology. Reaction networks without kinetic information, as we consider, can be identified with Petri nets. We present a new observational semantics for reaction networks that we call the attractor equivalence. The main idea of the attractor equivalence is to observe reachable attractors and reachability of an...
Attractor Density Models with Application to Analyzing the Stability of Biological Neural Networks
An attractor modeling algorithm is introduced which draws upon techniques found in nonlinear dynamics and pattern recognition. The technique is motivated by the need for quantitative measures that are able to assess the stability of biological neural networks which utilize nonlinear dynamics to process information.
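The "Learning in sparse attractor networks with inhibition" entry above turns on a simple mechanism: an activity-dependent inhibitory term is folded into the Hebbian update so that sparse (low-activity) patterns become stable attractors. The sketch below only illustrates that general idea and is not the rule from that paper: it uses a covariance-style Hebbian rule in which the mean activity a is subtracted, together with a global retrieval threshold standing in for the inhibitory pool; the network size, sparseness, and threshold are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_sparse(patterns, a):
    """Covariance-style Hebbian rule for sparse 0/1 patterns:
    subtracting the mean activity a plays the role of an inhibitory
    pool and keeps low-activity patterns from being washed out."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p - a, p - a)
    np.fill_diagonal(W, 0)                     # no self-connections
    return W / (a * (1 - a) * n)

def recall(W, state, theta, max_steps=50):
    """Synchronous retrieval with a global threshold theta, standing
    in for the inhibitory pool at retrieval time."""
    for _ in range(max_steps):
        new = (W @ state > theta).astype(float)
        if np.array_equal(new, state):         # fixed point reached
            break
        state = new
    return state

n, a = 200, 0.1                                # 200 units, ~10% active per pattern
patterns = (rng.random((5, n)) < a).astype(float)
W = train_sparse(patterns, a)

# Delete a few active units from the first pattern and try to restore it.
cue = patterns[0].copy()
cue[np.flatnonzero(cue)[:5]] = 0.0
out = recall(W, cue, theta=0.3)
print("overlap with stored pattern:", float(out @ patterns[0]) / patterns[0].sum())
```

With five patterns at roughly 10% activity, a cue missing a few active units typically settles back onto the stored pattern, whereas a plain Hebbian rule on 0/1 patterns (with no inhibitory correction) tends to recruit far too many units and drift toward a high-activity state.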
Publication date: 2000